Because constructing an optimal fuzzy decision tree is NP-hard, the study of heuristic algorithms is necessary.
Nevertheless, the work presented here is still preliminary, and several questions deserve further study, for example the concrete application of dynamic fuzzy decision trees and the extraction of decision rules.
Fuzzy decision tree induction is an important method for learning rules from examples with fuzzy representations; extracting rules from data with symbolic attributes and crisp classes can be regarded as a special case of fuzzy decision tree induction.
(2) The dynamic fuzzy decision tree is described systematically, from attribute treatment through tree construction to pruning. This establishes a basic conceptual framework for dynamic fuzzy decision trees and provides, to a certain extent, a theoretical foundation for further research on them.
From an analytic point of view, this paper examines the functional relationship between the parameter α and fuzzy entropy, discusses how the fuzzy entropy function changes as α increases, and then analyzes the sensitivity of the classification result of the fuzzy decision tree (the number of nodes, the number of rules, and the training and testing accuracy) to α. Building on the fuzzy ID3 algorithm, implemented on the Visual C++ development platform, an experimental method for obtaining the optimal α is proposed. Experiments show that the α value obtained by this method yields the best classification result of the fuzzy decision tree, which provides a theoretical basis for selecting α so as to obtain the best classification result when fuzzy decision trees are used.
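To make the relationship concrete, the following sketch (written in Python for brevity; the experiments described above were implemented on the Visual C++ platform) computes a fuzzy classification entropy at a significance level α and prints its value over a grid of α values, so the trend as α increases can be observed. The α-cut interpretation of the significance level, the entropy formula based on relative fuzzy frequencies, and the toy membership values are illustrative assumptions rather than the exact formulation used in this work.

    import numpy as np

    def fuzzy_entropy(class_memberships, alpha):
        """Fuzzy classification entropy at significance level alpha.

        class_memberships: (n_examples, n_classes) array of fuzzy membership
        degrees of each example in each class.  Memberships below alpha are
        discarded (an alpha-cut), then relative fuzzy frequencies are formed
        and the usual entropy formula is applied.
        """
        m = np.where(class_memberships >= alpha, class_memberships, 0.0)
        totals = m.sum(axis=0)              # fuzzy cardinality of each class
        s = totals.sum()
        if s == 0.0:
            return 0.0                      # nothing survives the cut
        p = totals / s                      # relative fuzzy frequencies
        p = p[p > 0.0]
        return float(-(p * np.log2(p)).sum())

    # Toy data: 6 examples, 2 overlapping classes (memberships need not sum to 1).
    M = np.array([[0.9, 0.2],
                  [0.8, 0.3],
                  [0.6, 0.5],
                  [0.4, 0.7],
                  [0.2, 0.9],
                  [0.1, 0.8]])

    # Trend of the entropy as the significance level alpha increases.
    for alpha in np.arange(0.0, 1.0, 0.1):
        print(f"alpha = {alpha:.1f}  entropy = {fuzzy_entropy(M, alpha):.4f}")

Running the loop shows how the entropy value reacts as more of the weak memberships are cut away, which mirrors the kind of sensitivity analysis described above.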
When a fuzzy decision tree is built, the expanded attribute selected by fuzzy entropy cannot separate the classes crisply as in a classical decision tree; instead, the examples covered by the attribute's linguistic terms overlap to some extent. The whole tree-building process is therefore carried out at a given significance level α. Introducing α reduces this overlap to a certain degree, which decreases the uncertainty of classification and improves the classification result of the fuzzy decision tree, as illustrated by the sketch below.
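Under the same illustrative assumptions as before, the sketch below shows how an expanded attribute could be chosen at a node in the spirit of fuzzy ID3: the examples' memberships in an attribute's linguistic terms overlap (unlike a crisp split), the significance level α cuts away weak memberships, and the attribute with the smallest weighted fuzzy entropy is expanded. The data layout (a dict mapping attribute names to term-membership matrices) and the helper names are hypothetical.

    import numpy as np

    def branch_entropy(term_membership, class_memberships, alpha):
        """Fuzzy entropy and fuzzy size of the examples covered by one linguistic term."""
        # degree to which each example belongs to both the branch and each class
        joint = np.minimum(term_membership[:, None], class_memberships)
        joint = np.where(joint >= alpha, joint, 0.0)     # significance level alpha
        totals = joint.sum(axis=0)
        s = totals.sum()
        if s == 0.0:
            return 0.0, 0.0
        p = totals / s
        p = p[p > 0.0]
        return float(-(p * np.log2(p)).sum()), float(s)

    def select_expanded_attribute(attributes, class_memberships, alpha):
        """Return the attribute with the smallest weighted fuzzy entropy.

        attributes: dict name -> (n_terms, n_examples) membership matrix of the
        attribute's linguistic terms; rows may overlap, unlike a crisp split.
        """
        best_name, best_h = None, np.inf
        for name, terms in attributes.items():
            pairs = [branch_entropy(t, class_memberships, alpha) for t in terms]
            weights = np.array([w for _, w in pairs])
            if weights.sum() == 0.0:
                continue
            h = float(np.dot(weights / weights.sum(), [e for e, _ in pairs]))
            if h < best_h:
                best_name, best_h = name, h
        return best_name, best_h

    # Toy example: two fuzzy attributes over 4 examples, two overlapping classes.
    classes = np.array([[0.9, 0.1], [0.7, 0.4], [0.3, 0.8], [0.1, 0.9]])
    attributes = {
        "temperature": np.array([[0.8, 0.6, 0.3, 0.1],    # term "hot"
                                 [0.2, 0.5, 0.7, 0.9]]),  # term "cool"
        "humidity":    np.array([[0.5, 0.5, 0.6, 0.4],    # term "high"
                                 [0.5, 0.6, 0.4, 0.6]]),  # term "low"
    }
    print(select_expanded_attribute(attributes, classes, alpha=0.3))

Raising α in this sketch discards the weakest joint memberships before the entropy is computed, which is one simple way to model how the significance level reduces the overlap between branches.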